22 research outputs found

    Prediction of dyslipidemia using gene mutations, family history of diseases and anthropometric indicators in children and adolescents: The CASPIAN-III study

    Get PDF
    Dyslipidemia, a disorder of lipoprotein metabolism resulting in an elevated lipid profile, is an important modifiable risk factor for coronary heart disease. It is associated with more than four million deaths worldwide per year. Half of the children with dyslipidemia remain hyperlipidemic in adulthood, so its prediction and screening are critical. We designed a new dyslipidemia diagnosis system. A sample of 725 subjects (age 14.66 ± 2.61 years; 48% male; dyslipidemia prevalence of 42%) was selected by multistage random cluster sampling in Iran. Single nucleotide polymorphisms (rs1801177, rs708272, rs320, rs328, rs2066718, rs2230808, rs5880, rs5128, rs2893157, rs662799, and Apolipoprotein-E2/E3/E4), together with anthropometric and lifestyle attributes and family history of diseases, were analyzed. A framework for classifying mixed-type data in imbalanced datasets was proposed. It included internal feature mapping and selection, re-sampling, an optimized group method of data handling using convex and stochastic optimizations, a new cost function for imbalanced data, and an internal validation. Its performance was assessed using hold-out and 4-fold cross-validation. Four other classifiers, namely support vector machines, decision trees, multilayer perceptron neural networks, and multiple logistic regression, were also used. The average sensitivity, specificity, precision and accuracy of the proposed system were 93%, 94%, 94% and 92%, respectively, in cross-validation. It significantly outperformed the other classifiers and also showed excellent agreement and high correlation with the gold standard. A non-invasive, economical version of the algorithm, suitable for low- and middle-income countries, was also implemented. It is thus a promising new tool for the prediction of dyslipidemia.
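    The performance indices quoted above follow the standard confusion-matrix definitions; a minimal sketch in Python (the counts are hypothetical, not the study's data):

```python
# Confusion-matrix performance indices as reported in the abstract:
# sensitivity, specificity, precision, accuracy. Counts are hypothetical.

def classification_indices(tp, fp, tn, fn):
    """Return sensitivity, specificity, precision and accuracy."""
    sensitivity = tp / (tp + fn)            # recall on the dyslipidemia class
    specificity = tn / (tn + fp)            # recall on the healthy class
    precision = tp / (tp + fp)              # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

sens, spec, prec, acc = classification_indices(tp=93, fp=6, tn=94, fn=7)
print(round(sens, 2), round(spec, 2), round(prec, 2), round(acc, 2))
```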

    Developing Non-Laboratory Cardiovascular Risk Assessment Charts and Validating Laboratory and Non-Laboratory-Based Models.

    Get PDF
    BACKGROUND: Developing a simplified risk assessment model based on non-laboratory risk factors that determines cardiovascular risk as accurately as a laboratory-based one can be valuable, particularly in developing countries with limited resources. OBJECTIVE: To develop a simplified non-laboratory cardiovascular disease (CVD) risk assessment chart based on a previously reported laboratory-based chart, and to evaluate internal and external validation and recalibration of both risk models to assess their performance in other populations. METHODS: A 10-year non-laboratory-based risk prediction chart was developed for fatal and non-fatal CVD using Cox proportional hazards regression. Data from the Isfahan Cohort Study (ICS), a population-based study of 6504 adults aged ≥ 35 years followed up for at least ten years, were used for the non-laboratory-based model derivation. Participants were followed up until the occurrence of CVD events. Tehran Lipid and Glucose Study (TLGS) data were used to evaluate the external validity of both the non-laboratory and laboratory risk assessment models in a population other than the one used for model derivation. RESULTS: The discrimination and calibration analysis of the non-laboratory model showed a Harrell's C of 0.73 (95% CI 0.71-0.74) and a Nam-D'Agostino χ2 of 11.01 (p = 0.27), respectively. The non-laboratory model agreed with the laboratory one and classified high-risk and low-risk patients as accurately. Both non-laboratory and laboratory risk prediction models showed good discrimination in the external validation, with Harrell's C of 0.77 (95% CI 0.75-0.78) and 0.78 (95% CI 0.76-0.79), respectively. CONCLUSIONS: Our simplified risk assessment model based on non-laboratory risk factors determined cardiovascular risk as accurately as the laboratory-based one. This approach can provide a simple risk assessment tool where laboratory testing is unavailable, inconvenient, or costly.
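    A Cox-based chart converts a person's linear predictor into a 10-year event probability through the baseline survival. A minimal sketch of that mechanism, with entirely hypothetical coefficients and baseline values (not the ICS/TLGS models):

```python
import math

# 10-year CVD risk from a Cox proportional hazards model:
#   risk = 1 - S0(10) ** exp(lp - lp_mean)
# where S0(10) is the baseline 10-year survival. All numbers below are
# illustrative assumptions, not the published model.

BETA = {"age": 0.060, "sbp": 0.012, "smoker": 0.500, "diabetes": 0.650}
LP_MEAN = 4.9    # mean linear predictor in the derivation cohort (assumed)
S0_10 = 0.95     # baseline 10-year survival (assumed)

def ten_year_risk(age, sbp, smoker, diabetes):
    """Probability of a fatal/non-fatal CVD event within 10 years."""
    lp = (BETA["age"] * age + BETA["sbp"] * sbp
          + BETA["smoker"] * smoker + BETA["diabetes"] * diabetes)
    return 1.0 - S0_10 ** math.exp(lp - LP_MEAN)

print(f"{ten_year_risk(55, 130, smoker=1, diabetes=0):.1%}")
```

Recalibration to a new population (as done with TLGS) amounts to re-estimating `LP_MEAN` and `S0_10` while keeping the coefficients fixed.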

    PARS risk charts: A 10-year study of risk assessment for cardiovascular diseases in Eastern Mediterranean Region

    Get PDF
    This study was designed to develop a risk assessment chart for the clinical management and prevention of cardiovascular disease (CVD) risk in the Iranian population, which is vital for developing national prevention programs. The Isfahan Cohort Study (ICS) is a population-based prospective study of 6504 Iranian adults aged ≥ 35 years, followed up for ten years, from 2001 to 2010. Behavioral and cardiometabolic risk factors were examined every five years, while biennial follow-ups for the occurrence of events were performed by phone calls or by verbal autopsy. Among these participants, 5432 (2784 women, 51.3%) were CVD free at the baseline examination and had at least one follow-up. Cox proportional hazards regression was used to predict the risk of ischemic CVD events, including sudden cardiac death, unstable angina, myocardial infarction, and stroke. Model fit statistics such as the area under the receiver-operating characteristic curve (AUROC), the calibration chi-square, and the overall bias were used to assess model performance. We also tested the Framingham model for comparison. Seven hundred and five CVD events occurred during 49452.8 person-years of follow-up. The event probabilities were calculated and presented color-coded on each gender-specific PARS chart. The AUROC and Harrell's C indices were 0.74 (95% CI, 0.72–0.76) and 0.73, respectively. In the calibration, the Nam-D'Agostino χ2 was 10.82 (p = 0.29). The overall bias of the proposed model was 95.60%. The PARS model was also internally validated using cross-validation. An Android app and a Web-based risk assessment tool were also developed to broaden the public health impact. In comparison, the refitted and recalibrated Framingham models estimated the CVD incidence with overall biases of 149.60% and 128.23% for men, and 222.70% and 176.07% for women, respectively.
    In conclusion, the PARS risk assessment chart is a simple, accurate, and well-calibrated tool for predicting the 10-year risk of CVD occurrence in the Iranian population and can be used in an attempt to develop national guidelines for CVD management.
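    "Overall bias" is read here as the ratio of total predicted events to total observed events, expressed as a percentage (an interpretive assumption; values near 100%, like the reported 95.60%, indicate good overall calibration, while the Framingham figures above 100% indicate over-prediction):

```python
# Overall bias of a risk model: total predicted events divided by total
# observed events, in percent. This interpretation of the abstract's
# "overall bias" is an assumption; the data below are hypothetical.

def overall_bias(predicted_risks, observed_events):
    expected = sum(predicted_risks)   # expected number of events
    observed = sum(observed_events)   # actual number of events (0/1 flags)
    return 100.0 * expected / observed

risks = [0.25, 0.25, 0.25, 0.25]     # hypothetical 10-year risks
events = [0, 1, 0, 0]                # 1 = event occurred during follow-up
print(overall_bias(risks, events))   # → 100.0 (perfect overall calibration)
```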

    Automatic classification between COVID-19 and Non-COVID-19 pneumonia using symptoms, comorbidities, and laboratory findings: the Khorshid COVID cohort study

    Get PDF
    Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was a global disaster in 2020. Accurate and early diagnosis of COVID-19 is still essential for health policymaking. Reverse transcriptase-polymerase chain reaction (RT-PCR) has served as the operational gold standard for COVID-19 diagnosis. We aimed to design and implement a reliable COVID-19 diagnosis method that provides the risk of infection from demographics, symptoms and signs, blood markers, and family history of diseases, with excellent agreement with the results obtained by RT-PCR and CT scan. Our study primarily used sample data from a 1-year hospital-based prospective COVID-19 open cohort, the Khorshid COVID Cohort (KCC) study. A sample of 634 patients with COVID-19 and 118 patients with pneumonia with similar characteristics whose RT-PCR and chest CT scan were negative (the control group) (dataset 1) was used to design the system and for internal validation. Two other online datasets, one of symptoms (dataset 2) and one of blood tests (dataset 3), were also analyzed. A combination of one-hot encoding, stability feature selection, over-sampling, and an ensemble classifier was used. Ten-fold stratified cross-validation was performed. In addition to gender and symptom duration, signs and symptoms, blood biomarkers, and comorbidities were selected. Performance indices of the cross-validated confusion matrix for dataset 1 were as follows: sensitivity of 96% [confidence interval, CI, 95%: 94–98], specificity of 95% [90–99], positive predictive value (PPV) of 99% [98–100], negative predictive value (NPV) of 82% [76–89], diagnostic odds ratio (DOR) of 496 [198–1,245], area under the ROC curve (AUC) of 0.96 [0.94–0.97], Matthews correlation coefficient (MCC) of 0.87 [0.85–0.88], accuracy of 96% [94–98], and Cohen's kappa of 0.86 [0.81–0.91].
    The proposed algorithm showed excellent diagnostic accuracy and class-labeling agreement, and fair discriminant power. The AUC on datasets 2 and 3 was 0.97 [0.96–0.98] and 0.92 [0.91–0.94], respectively. The most important features were white blood cell count, shortness of breath, and C-reactive protein for datasets 1, 2, and 3, respectively. The proposed algorithm is thus a promising COVID-19 diagnosis method that could complement simple blood tests and symptom screening. However, RT-PCR and chest CT scans, used as the gold standard, are not 100% accurate.
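    The secondary indices reported for dataset 1 (DOR, MCC, Cohen's kappa) can all be computed from the same confusion matrix; a sketch with hypothetical counts, not the KCC matrix:

```python
import math

# Diagnostic odds ratio (DOR), Matthews correlation coefficient (MCC) and
# Cohen's kappa from a 2x2 confusion matrix. Counts are illustrative only.

def dor(tp, fp, tn, fn):
    """Odds of a positive test in the diseased vs. non-diseased group."""
    return (tp / fn) / (fp / tn)

def mcc(tp, fp, tn, fn):
    """Correlation between predicted and true labels, in [-1, 1]."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

def cohens_kappa(tp, fp, tn, fn):
    """Chance-corrected agreement between prediction and truth."""
    n = tp + fp + tn + fn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

counts = dict(tp=610, fp=6, tn=112, fn=24)
print(dor(**counts), mcc(**counts), cohens_kappa(**counts))
```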

    A Comparative Study of the Neural Network, Fuzzy Logic, and Neuro-fuzzy Systems in Seismic Reservoir Characterization: An Example from Arab (Surmeh) Reservoir as an Iranian Gas Field, Persian Gulf Basin

    No full text
    Intelligent reservoir characterization using seismic attributes and hydraulic flow units has a vital role in the description of oil and gas traps. The predicted model allows an accurate understanding of reservoir quality, especially at un-cored well locations. This study was conducted in two major steps. In the first step, the survey compared different intelligent techniques to discover an optimum relationship between well logs and seismic data. For this purpose, three intelligent systems, including probabilistic neural networks (PNN), fuzzy logic (FL), and adaptive neuro-fuzzy inference systems (ANFIS), were used to predict the flow zone index (FZI). Well-derived FZI logs from three wells were employed to estimate the intelligent models in the Arab (Surmeh) reservoir. The produced models were validated on another well. Optimal seismic attributes for the estimation of FZI included acoustic impedance, integrated absolute amplitude, and average frequency. The results revealed that the ANFIS method performed better than the other systems and showed a remarkable reduction in the measured errors. In the second part of the study, a 3D FZI model was created using the ANFIS system. The integrated approach introduced in the current survey illustrated that the flow units extracted from the intelligent models agree well with well logs. Based on the results obtained, intelligent systems are powerful techniques for predicting flow units from seismic data (seismic attributes) at distant well locations. Finally, it was shown that the ANFIS method was effective in highlighting high- and low-quality flow units in the Arab (Surmeh) reservoir, an Iranian offshore gas field.
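    The flow zone index itself is conventionally derived at cored wells from porosity and permeability via the reservoir quality index; the study then predicts this quantity from seismic attributes. A sketch of the standard definition:

```python
import math

# Flow zone index (FZI) from core data, the target the intelligent models
# predict from seismic attributes. Standard definitions:
#   RQI   = 0.0314 * sqrt(k / phi)   (k in mD, phi as a fraction)
#   phi_z = phi / (1 - phi)          (normalized porosity)
#   FZI   = RQI / phi_z

def flow_zone_index(k_md, phi):
    rqi = 0.0314 * math.sqrt(k_md / phi)   # reservoir quality index (microns)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

# e.g. a rock sample with 100 mD permeability and 20% porosity:
print(round(flow_zone_index(k_md=100.0, phi=0.20), 3))
```

Samples with similar FZI values are grouped into the same hydraulic flow unit.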

    A Hybrid Computer-aided-diagnosis System for Prediction of Breast Cancer Recurrence (HPBCR) using optimized ensemble learning

    No full text
    Cancer is a collection of diseases that involve abnormal cell growth with the potential to invade or spread to other parts of the body. Breast cancer is the second leading cause of cancer death among women. A method for 5-year breast cancer recurrence prediction is presented in this manuscript. Clinicopathologic characteristics of 579 breast cancer patients (recurrence prevalence of 19.3%) were analyzed, and discriminative features were selected using statistical feature selection methods. They were further refined by particle swarm optimization (PSO) as the inputs of a classification system with ensemble learning (bagged decision tree: BDT). The proper combination of selected categorical features, and also the weight (importance) of the selected interval-measurement-scale features, were identified by the PSO algorithm. The performance of HPBCR (hybrid predictor of breast cancer recurrence) was assessed using hold-out and 4-fold cross-validation. Three other classifiers, namely support vector machines, DT, and multilayer perceptron neural networks, were used for comparison. The selected features were diagnosis age, tumor size, lymph node involvement ratio, number of involved axillary lymph nodes, progesterone receptor expression, having hormone therapy, and type of surgery. The minimum sensitivity, specificity, precision and accuracy of HPBCR were 77%, 93%, 95% and 85%, respectively, across the cross-validation folds and the hold-out test fold. HPBCR outperformed the other tested classifiers. It showed excellent agreement with the gold standard (i.e., the oncologist's opinion after blood tumor marker and imaging tests, and tissue biopsy). This algorithm is thus a promising online tool for the prediction of breast cancer recurrence.
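    PSO, the search strategy used above to pick feature combinations and weights, iteratively pulls candidate solutions toward their personal and global bests. A minimal generic sketch with a toy objective standing in for the classifier's cross-validation error (parameters and objective are illustrative, not the paper's setup):

```python
import random

# Minimal particle swarm optimization minimizing an objective over a
# continuous weight vector. Toy objective: weights closest to 0.5.

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, err = pso(lambda ws: sum((x - 0.5) ** 2 for x in ws), dim=3)
```

In the paper's setting, the objective would instead be the cross-validated error of the bagged decision tree under the candidate feature selection/weights.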

    Non-invasive decoding of the motoneurons: A guided source separation method based on convolution kernel compensation with clustered initial points

    Get PDF
    Despite progress in the understanding of neural codes, studies of cortico-muscular coupling still largely rely on the interferential electromyographic (EMG) signal, or its rectification, for the assessment of motor neuron pool behavior. This assessment is non-trivial and should be used with precaution. Direct analysis of neural codes by decomposing the EMG, also known as neural decoding, is an alternative to EMG amplitude estimation. In this study, we propose a fully deterministic hybrid surface EMG (sEMG) decomposition approach that combines the advantages of both template-based and blind source separation (BSS) decomposition approaches, a.k.a. guided source separation (GSS), to identify motor unit (MU) firing patterns. We use a single-pass density-based clustering algorithm to identify possible cluster representatives in different sEMG channels. These cluster representatives are then used as initial points of a modified gradient Convolution Kernel Compensation (gCKC) algorithm. Afterwards, we use a Kalman filter to reduce the noise impact and increase the convergence rate of MU filter identification by gCKC. Moreover, we designed an adaptive soft-thresholding method to identify MU firing times from the estimated MU spike trains. We tested the proposed algorithm on a set of synthetic sEMG signals with known MU firing patterns. A grid of 9 × 10 monopolar surface electrodes with 5-mm inter-electrode distances in both directions was simulated. Muscle excitation was set to 10, 30, and 50%. Colored Gaussian zero-mean noise with a signal-to-noise ratio (SNR) of 10, 20, and 30 dB, respectively, was added to 16-s-long sEMG signals sampled at 4,096 Hz. Overall, 45 simulated signals were analyzed. Our decomposition approach was compared with the gCKC algorithm.
    Overall, with our algorithm, the average number of identified MUs and the Rate-of-Agreement (RoA) were 16.41 ± 4.18 MUs and 84.00 ± 0.06%, respectively, whereas the gCKC identified 12.10 ± 2.32 MUs with an average RoA of 90.78 ± 0.08%. The proposed GSS method therefore identified more MUs than the gCKC, with comparable performance. Its performance depended on signal quality but not on signal complexity at different force levels. The proposed algorithm is a promising new offline tool in clinical neurophysiology.
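    The Rate-of-Agreement between an estimated and a reference MU spike train is commonly computed from discharges matched within a small time tolerance; a sketch of that computation (the tolerance and example discharge times are illustrative):

```python
# Rate-of-Agreement (RoA) between estimated and reference MU discharge
# times, in samples: matched discharges divided by matched + missed +
# falsely identified ones, in percent.

def rate_of_agreement(est, ref, tol=2):
    ref_sorted = sorted(ref)
    used = set()        # reference discharges already matched
    matched = 0
    for t in sorted(est):
        for j, r in enumerate(ref_sorted):
            if j not in used and abs(t - r) <= tol:
                used.add(j)
                matched += 1
                break
    false_pos = len(est) - matched
    missed = len(ref) - matched
    return 100.0 * matched / (matched + false_pos + missed)

ref = [100, 300, 520, 780, 1000]          # reference discharge times
est = [101, 299, 640, 781, 1001]          # one discharge mismatched
print(rate_of_agreement(est, ref))        # 4 matched, 1 false, 1 missed
```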

    Low-Cost Multispectral Sensor Array for Determining Leaf Nitrogen Status

    No full text
    A crop’s health can be determined from its leaf nutrient status; more precisely, the leaf nitrogen (N) level is a critical indicator that carries worthwhile nutrient information for classifying a plant’s health. However, existing non-invasive techniques are expensive and bulky. The aim of this study is to develop a low-cost, quick-read multi-spectral sensor array to predict the N level in leaves non-invasively. The proposed sensor module was developed using two reflectance-based multi-spectral sensors (visible and near-infrared (NIR)). The proposed device can capture reflectance data at 12 different wavelengths (six for each sensor). We conducted the experiment on canola leaves in a controlled greenhouse environment as well as in the field. In the greenhouse experiment, spectral data were collected from 87 leaves of 24 canola plants subjected to varying levels of N fertilization. Later, 42 canola cultivars were subjected to low and high nitrogen levels in the field experiment. The k-nearest neighbors (KNN) algorithm was employed to model the reflectance data. The trained model shows an average accuracy of 88.4% on the test set for the greenhouse experiment and 79.2% for the field experiment. Overall, the results indicate that the proposed cost-effective sensing system is viable for determining leaf nitrogen status.
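    KNN classifies a new reflectance vector by a majority vote among its k closest training samples; a minimal sketch (the two-wavelength toy dataset is made up for illustration, whereas the device reads 12 wavelengths):

```python
from collections import Counter

# Minimal k-nearest-neighbors classifier over reflectance vectors, the
# model type used to map spectral readings to an N-level class.

def knn_predict(train_X, train_y, x, k=3):
    # Sort training samples by squared Euclidean distance to x.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [(0.10, 0.80), (0.12, 0.78), (0.40, 0.30), (0.42, 0.28)]
train_y = ["low N", "low N", "high N", "high N"]
print(knn_predict(train_X, train_y, (0.11, 0.79)))   # → low N
```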